CBF-LLM: Safe Control for LLM Alignment

Miyaoka, Yuya, Inoue, Masaki

arXiv.org Artificial Intelligence

While large language models (LLMs) are known to have strong language understanding and generation abilities, they can also generate harmful, biased, and toxic content [1][2]. Alignment of LLMs ensures that they generate content that is "desirable" for the user, typically meaning content that is safe and ethical. Various approaches to LLM alignment have been presented ([1], [2], [3], and references therein). The major approach to alignment is reinforcement learning from human feedback (RLHF) [4], in which a reward model is constructed from human feedback and used to train the LLM. Variants of the RLHF architecture have also been proposed, such as Safe-RLHF [5], SENSEI [6], and f-DPG [7], along with implementations such as the training of pre-trained LLMs [8][9] and applications such as information-seeking chatbots [10].
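The core RLHF loop described in this abstract can be summarized in a few lines: a reward model built from human feedback scores sampled outputs, and a policy-gradient update pushes the policy toward higher-reward generations. Below is a minimal, self-contained sketch of that idea in plain Python/NumPy. Everything here is illustrative, not the CBF-LLM method: the "policy" is a softmax over three canned responses, the "reward model" is a fixed lookup standing in for a network learned from preference labels, and the update is bare REINFORCE (real RLHF pipelines typically use PPO with a KL penalty against the pre-trained model).

import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["safe_reply", "neutral_reply", "toxic_reply"]

# Toy "reward model": in real RLHF this is a learned network fit to human
# preference comparisons; here a fixed lookup stands in for it.
reward_model = {"safe_reply": 1.0, "neutral_reply": 0.2, "toxic_reply": -1.0}

# Toy "policy": a softmax distribution over three canned responses.
policy_logits = np.zeros(len(VOCAB))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(200):
    probs = softmax(policy_logits)
    i = rng.choice(len(VOCAB), p=probs)   # sample a response from the policy
    r = reward_model[VOCAB[i]]            # score it with the reward model
    # REINFORCE: grad of log p(i) w.r.t. the logits is onehot(i) - probs,
    # so we nudge the logits in proportion to the observed reward.
    grad = -probs
    grad[i] += 1.0
    policy_logits += 0.1 * r * grad

# After training, probability mass should concentrate on the high-reward reply.
print({v: round(p, 3) for v, p in zip(VOCAB, softmax(policy_logits))})

Running this drives nearly all probability onto "safe_reply", which is the whole mechanism in miniature: human preferences shape the reward model, and the reward model shapes the policy.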


From Instructions to Intrinsic Human Values -- A Survey of Alignment Goals for Big Models

Yao, Jing, Yi, Xiaoyuan, Wang, Xiting, Wang, Jindong, Xie, Xing

arXiv.org Artificial Intelligence

Big models, exemplified by Large Language Models (LLMs), are typically pre-trained on massive data and comprise enormous numbers of parameters; they not only achieve significantly improved performance across diverse tasks but also exhibit emergent capabilities absent in smaller models. However, the growing intertwining of big models with everyday human life poses potential risks and might cause serious social harm. Therefore, many efforts have been made to align LLMs with humans so that they better follow user instructions and satisfy human preferences. Nevertheless, 'what to align with' has not been fully discussed, and inappropriate alignment goals might even backfire. In this paper, we conduct a comprehensive survey of the alignment goals in existing work and trace their evolution paths to help identify the most essential goal. In particular, we investigate related work from two perspectives: the definition of alignment goals and alignment evaluation. Our analysis encompasses three distinct levels of alignment goals and reveals a goal transformation from fundamental abilities to value orientation, indicating the potential of intrinsic human values as the alignment goal for enhanced LLMs. Based on these results, we further discuss the challenges of achieving such intrinsic value alignment and provide a collection of available resources for future research on the alignment of big models.